Diagonal matrix

In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is \left[\begin{smallmatrix} 3 & 0 \\ 0 & 2 \end{smallmatrix}\right], while an example of a 3×3 diagonal matrix is \left[\begin{smallmatrix} 6 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 4 \end{smallmatrix}\right]. An identity matrix of any size, or any multiple of it, is a diagonal matrix called a scalar matrix, for example, \left[\begin{smallmatrix} 0.5 & 0 \\ 0 & 0.5 \end{smallmatrix}\right]. In geometry, a diagonal matrix may be used as a scaling matrix, since matrix multiplication with it results in changing scale (size) and possibly also shape; only a scalar matrix results in uniform change in scale.


Definition
As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix \mathbf{D} = (d_{i,j}) with n columns and n rows is diagonal if \forall i,j \in \{1, 2, \ldots, n\}, i \ne j \implies d_{i,j} = 0.

However, the main diagonal entries are unrestricted.

The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with all the entries not of the form d_{i,i} being zero. For example: \begin{bmatrix}
 1 & 0 & 0\\
 0 & 4 & 0\\
 0 & 0 & -3\\
 0 & 0 & 0
\end{bmatrix} \quad \text{or} \quad \begin{bmatrix}
 1 & 0 & 0 & 0 & 0\\
 0 & 4 & 0 & 0 & 0\\
 0 & 0 & -3 & 0 & 0
\end{bmatrix}

More often, however, diagonal matrix refers to square matrices, which can be specified explicitly as a square diagonal matrix. A square diagonal matrix is a symmetric matrix, so this can also be called a symmetric diagonal matrix.

The following matrix is an example of a square diagonal matrix: \begin{bmatrix} 1 & 0 & 0\\ 0 & 4 & 0\\ 0 & 0 & -2 \end{bmatrix}

If the entries are real numbers or complex numbers, then it is a normal matrix as well.

In the remainder of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices".
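
As a concrete illustration (not part of the original article), the defining condition can be checked numerically; the helper function below is purely illustrative and assumes NumPy:

    import numpy as np

    def is_diagonal(M):
        """Return True if every entry of M outside the main diagonal is zero."""
        mask = ~np.eye(M.shape[0], M.shape[1], dtype=bool)   # off-diagonal positions
        return bool(np.all(M[mask] == 0))

    D = np.diag([1, 4, -2])                 # square diagonal matrix
    R = np.array([[1, 0, 0, 0, 0],
                  [0, 4, 0, 0, 0],
                  [0, 0, -3, 0, 0]])        # rectangular diagonal matrix

    print(is_diagonal(D), is_diagonal(R))   # True True
    print(is_diagonal(np.ones((3, 3))))     # False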


Vector-to-matrix diag operator
A diagonal matrix can be constructed from a vector \mathbf{a} = \begin{bmatrix}a_1 & \dots & a_n\end{bmatrix}^\textsf{T} using the \operatorname{diag} operator:
\mathbf{D} = \operatorname{diag}(a_1, \dots, a_n).
     

This may be written more compactly as \mathbf{D} = \operatorname{diag}(\mathbf{a}).

The same operator is also used to represent block diagonal matrices as \mathbf{A} = \operatorname{diag}(\mathbf A_1, \dots, \mathbf A_n) where each argument is a matrix.

The operator may be written as

\operatorname{diag}(\mathbf{a}) = \left(\mathbf{a} \mathbf{1}^\textsf{T}\right) \circ \mathbf{I},
     
where \circ represents the Hadamard product, and \mathbf{1} is a constant vector with all elements equal to 1.
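
The identity above can be checked directly; a minimal NumPy sketch (illustrative, not from the original article):

    import numpy as np

    a = np.array([3.0, 2.0, 5.0])

    D_direct = np.diag(a)                                   # diag(a) built directly
    D_hadamard = np.outer(a, np.ones_like(a)) * np.eye(3)   # (a 1^T) o I

    print(np.array_equal(D_direct, D_hadamard))             # True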


Matrix-to-vector diag operator
The inverse matrix-to-vector \operatorname{diag} operator is sometimes denoted by the identically named \operatorname{diag}(\mathbf{D}) = \begin{bmatrix}a_1 & \dots & a_n\end{bmatrix}^\textsf{T}, where the argument is now a matrix, and the result is a vector of its diagonal entries.

The following property holds:

\operatorname{diag}(\mathbf{A}\mathbf{B}) = \sum_j \left(\mathbf{A} \circ \mathbf{B}^\textsf{T}\right)_{ij} = \left( \mathbf{A} \circ \mathbf{B}^\textsf{T} \right) \mathbf{1}.
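
This property is easy to verify numerically; a small sketch, assuming NumPy (not part of the original article):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))

    lhs = np.diag(A @ B)           # diagonal entries of the product AB
    rhs = (A * B.T) @ np.ones(4)   # row sums of the Hadamard product A o B^T

    print(np.allclose(lhs, rhs))   # True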
     


Scalar matrix
A diagonal matrix with equal diagonal entries is a scalar matrix; that is, a scalar multiple \lambda of the identity matrix \mathbf{I}. Its effect on a vector is scalar multiplication by \lambda. For example, a 3×3 scalar matrix has the form:
 \begin{bmatrix}
   \lambda &       0 & 0       \\
         0 & \lambda & 0       \\
         0 &       0 & \lambda
 \end{bmatrix} \equiv \lambda \boldsymbol{I}_3
     

The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size. By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct only commutes with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if a diagonal matrix \mathbf{D} = \operatorname{diag}(a_1, \dots, a_n) has a_i \neq a_j, then given a matrix \mathbf{M} with m_{ij} \neq 0, the (i, j) terms of the products are (\mathbf{DM})_{ij} = a_i m_{ij} and (\mathbf{MD})_{ij} = m_{ij} a_j, and a_j m_{ij} \neq m_{ij} a_i (since one can divide by m_{ij}), so they do not commute unless the off-diagonal terms are zero. Diagonal matrices where the diagonal entries are not all equal or all distinct have centralizers intermediate between the whole space and only diagonal matrices.
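
A quick numerical check of the commutation claims (a sketch, not from the original article; the matrices are chosen arbitrarily):

    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.standard_normal((3, 3))    # generic matrix with nonzero off-diagonal entries

    S = 2.0 * np.eye(3)                # scalar matrix
    D = np.diag([1.0, 2.0, 3.0])       # distinct diagonal entries

    print(np.allclose(S @ M, M @ S))   # True: a scalar matrix commutes with everything
    print(np.allclose(D @ M, M @ D))   # False: distinct diagonals commute only with diagonal matrices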

For an abstract vector space V (rather than the concrete vector space K^n), the analog of scalar matrices are scalar transformations. This is true more generally for a module M over a ring R, with the endomorphism algebra \operatorname{End}(M) (the algebra of linear operators on M) replacing the algebra of matrices. Formally, scalar multiplication is a linear map, inducing a map R \to \operatorname{End}(M) (from a scalar \lambda to its corresponding scalar transformation, multiplication by \lambda) exhibiting \operatorname{End}(M) as an R-algebra. For vector spaces, the scalar transforms are exactly the center of the endomorphism algebra, and, similarly, scalar invertible transforms are the center of the general linear group \operatorname{GL}(V). The former is more generally true of free modules M \cong R^n, for which the endomorphism algebra is isomorphic to a matrix algebra.


Vector operations
Multiplying a vector by a diagonal matrix multiplies each of the terms by the corresponding diagonal entry. Given a diagonal matrix \mathbf{D} = \operatorname{diag}(a_1, \dots, a_n) and a vector \mathbf{v} = \begin{bmatrix} x_1 & \dotsm & x_n \end{bmatrix}^\textsf{T}, the product is: \mathbf{D}\mathbf{v} = \operatorname{diag}(a_1, \dots, a_n)\begin{bmatrix}x_1 \\ \vdots \\ x_n\end{bmatrix} =
 \begin{bmatrix}
   a_1 \\
       & \ddots \\
       &        & a_n
 \end{bmatrix}
 \begin{bmatrix}x_1 \\ \vdots \\ x_n\end{bmatrix} =
 \begin{bmatrix}a_1 x_1 \\ \vdots \\ a_n x_n\end{bmatrix}.
     

This can be expressed more compactly by using a vector instead of a diagonal matrix, \mathbf{d} = \begin{bmatrix} a_1 & \dotsm & a_n \end{bmatrix}^\textsf{T}, and taking the Hadamard product of the vectors (entrywise product), denoted \mathbf{d} \circ \mathbf{v}:

\mathbf{D}\mathbf{v} = \mathbf{d} \circ \mathbf{v} =

 \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} \circ \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} =
 \begin{bmatrix} a_1 x_1 \\ \vdots \\ a_n x_n \end{bmatrix}.
     

This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix. This product is thus used in machine learning, such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF (CRC Press, 2009, ISBN 9781420059458), since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly.
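
For example, in NumPy the dense product and the entrywise form give the same result, but the latter never materializes the off-diagonal zeros (an illustrative sketch, not from the original article):

    import numpy as np

    d = np.array([2.0, -1.0, 0.5])
    v = np.array([3.0, 4.0, 8.0])

    dense = np.diag(d) @ v    # builds the full n-by-n matrix, mostly zeros
    entrywise = d * v         # Hadamard product of the two vectors

    print(np.allclose(dense, entrywise))   # True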


Matrix operations
The operations of matrix addition and matrix multiplication are especially simple for diagonal matrices. Write \operatorname{diag}(a_1, \ldots, a_n) for a diagonal matrix whose diagonal entries starting in the upper left corner are a_1, \ldots, a_n. Then, for addition, we have

 \operatorname{diag}(a_1,\, \ldots,\, a_n) + \operatorname{diag}(b_1,\, \ldots,\, b_n) = \operatorname{diag}(a_1 + b_1,\, \ldots,\, a_n + b_n)
     

and for matrix multiplication,

\operatorname{diag}(a_1,\, \ldots,\, a_n) \operatorname{diag}(b_1,\, \ldots,\, b_n) = \operatorname{diag}(a_1 b_1,\, \ldots,\, a_n b_n).

The diagonal matrix \operatorname{diag}(a_1, \ldots, a_n) is invertible if and only if the entries a_1, \ldots, a_n are all nonzero. In this case, we have

\operatorname{diag}(a_1,\, \ldots,\, a_n)^{-1} = \operatorname{diag}(a_1^{-1},\, \ldots,\, a_n^{-1}).

In particular, the diagonal matrices form a subring of the ring of all n-by-n matrices.

Multiplying an n-by-n matrix \mathbf{A} from the left with \operatorname{diag}(a_1, \ldots, a_n) amounts to multiplying the i-th row of \mathbf{A} by a_i for all i; multiplying the matrix \mathbf{A} from the right with \operatorname{diag}(a_1, \ldots, a_n) amounts to multiplying the i-th column of \mathbf{A} by a_i for all i.
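
These rules can be confirmed with a short NumPy sketch (illustrative only, not from the original article):

    import numpy as np

    a = np.array([2.0, 3.0, 5.0])
    b = np.array([7.0, 1.0, -4.0])
    M = np.arange(9.0).reshape(3, 3)

    # Addition and multiplication act entrywise on the diagonals.
    print(np.allclose(np.diag(a) + np.diag(b), np.diag(a + b)))      # True
    print(np.allclose(np.diag(a) @ np.diag(b), np.diag(a * b)))      # True

    # Inversion inverts each (nonzero) diagonal entry.
    print(np.allclose(np.linalg.inv(np.diag(a)), np.diag(1.0 / a)))  # True

    # Left multiplication scales the rows of M, right multiplication its columns.
    print(np.allclose(np.diag(a) @ M, a[:, None] * M))               # True
    print(np.allclose(M @ np.diag(a), M * a[None, :]))               # True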


Operator matrix in eigenbasis
As explained in determining the coefficients of an operator matrix, there is a special basis, \mathbf{e}_1, \ldots, \mathbf{e}_n, for which the matrix \mathbf{A} takes the diagonal form. Hence, in the defining equation \mathbf{Ae}_j = \sum_i a_{i,j} \mathbf e_i, all coefficients a_{i,j} with i \ne j are zero, leaving only one term per sum. The surviving diagonal elements, a_{i,i}, are known as eigenvalues and designated with \lambda_i in the equation, which reduces to \mathbf{Ae}_i = \lambda_i \mathbf e_i. The resulting equation is known as the eigenvalue equation (Dover Publications, ISBN 9780486482125) and is used to derive the characteristic polynomial and, further, eigenvalues and eigenvectors.

In other words, the eigenvalues of \operatorname{diag}(\lambda_1, \dots, \lambda_n) are \lambda_1, \dots, \lambda_n with associated eigenvectors of \mathbf{e}_1, \dots, \mathbf{e}_n.
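
As a small numerical illustration (not from the original article), the eigenvalues of a diagonal matrix are its diagonal entries, and the standard basis vectors are associated eigenvectors:

    import numpy as np

    D = np.diag([6.0, 5.0, 4.0])
    eigvals, eigvecs = np.linalg.eig(D)

    print(np.sort(eigvals))                              # [4. 5. 6.] -- the diagonal entries
    print(np.allclose(D @ eigvecs, eigvecs * eigvals))   # True: A e_i = lambda_i e_i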


Properties
  • The determinant of \operatorname{diag}(a_1, \ldots, a_n) is the product a_1 \cdots a_n.
  • The adjugate of a diagonal matrix is again diagonal.
  • Where all matrices are square,
    • A matrix is diagonal if and only if it is triangular and normal.
    • A matrix is diagonal if and only if it is both upper- and lower-triangular.
    • A diagonal matrix is symmetric.
  • The identity matrix \mathbf{I}_n and the zero matrix are diagonal.
  • A 1×1 matrix is always diagonal.
  • The square of a 2×2 matrix with zero trace is always diagonal (see the sketch below).
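
A quick check of the last property (a sketch, not from the original article): a 2×2 matrix with zero trace has the form \left[\begin{smallmatrix} a & b \\ c & -a \end{smallmatrix}\right], and its square is (a^2 + bc) times the identity, hence diagonal.

    import numpy as np

    rng = np.random.default_rng(2)
    a, b, c = rng.standard_normal(3)
    M = np.array([[a, b],
                  [c, -a]])    # trace(M) = 0

    print(M @ M)                                            # (a^2 + bc) * I
    print(np.allclose(M @ M, (a * a + b * c) * np.eye(2)))  # True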


Applications
Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operation and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix.

In fact, a given n-by-n matrix \mathbf{A} is similar to a diagonal matrix (meaning that there is a matrix \mathbf{X} such that \mathbf{X}^{-1}\mathbf{A}\mathbf{X} is diagonal) if and only if it has n linearly independent eigenvectors. Such matrices are said to be diagonalizable.

Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix (if \mathbf{A}\mathbf{A}^* = \mathbf{A}^*\mathbf{A} then there exists a unitary matrix \mathbf{U} such that \mathbf{U}\mathbf{A}\mathbf{U}^* is diagonal). Furthermore, the singular value decomposition implies that for any matrix \mathbf{A}, there exist unitary matrices \mathbf{U} and \mathbf{V} such that \mathbf{U}^*\mathbf{A}\mathbf{V} is diagonal with positive entries.
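
Both statements can be exercised numerically; the sketch below (illustrative, not from the original article) uses the real symmetric case of the spectral theorem and a random matrix for the SVD:

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((3, 3))

    # Spectral theorem, real symmetric case: S = Q diag(w) Q^T with Q orthogonal.
    S = A + A.T
    w, Q = np.linalg.eigh(S)
    print(np.allclose(Q @ np.diag(w) @ Q.T, S))   # True

    # Singular value decomposition: A = U diag(s) V^* with U, V unitary.
    U, s, Vh = np.linalg.svd(A)
    print(np.allclose(U @ np.diag(s) @ Vh, A))    # True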


Operator theory
In operator theory, particularly the study of partial differential equations, operators are particularly easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation. Therefore, a key technique to understanding operators is a change of coordinates (in the language of operators, an integral transform), which changes the basis to an eigenbasis of eigenfunctions and makes the equation separable. An important example of this is the Fourier transform, which diagonalizes constant coefficient differentiation operators (or more generally translation invariant operators), such as the Laplacian operator, say, in the heat equation.

Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function; the values of the function at each point correspond to the diagonal entries of a matrix.
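
A finite-dimensional analogue (an illustrative sketch, not from the original article): the discrete Fourier transform diagonalizes the periodic second-difference operator, which is a circulant and therefore translation-invariant matrix.

    import numpy as np

    n = 8
    # Periodic discrete Laplacian: a circulant (translation-invariant) matrix.
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, -1] = L[-1, 0] = 1.0

    # The DFT matrix F (with F @ x == fft(x)) diagonalizes every circulant matrix.
    F = np.fft.fft(np.eye(n))
    T = F @ L @ np.linalg.inv(F)

    print(np.allclose(T, np.diag(np.diag(T))))            # True: T is diagonal
    print(np.allclose(np.diag(T), np.fft.fft(L[:, 0])))   # eigenvalues = DFT of the first column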

